Improved image inpainting network incorporating supervised attention module and cross-stage feature fusion
Qiaoling HUANG, Bochuan ZHENG, Zicheng DING, Zedong WU
Journal of Computer Applications, 2024, 44(2): 572-579. DOI: 10.11772/j.issn.1001-9081.2023020123

Image inpainting techniques for irregular missing regions are versatile but challenging. To address the problem that existing inpainting methods may produce artifacts, distorted structures, and blurred textures on high-resolution images, an improved image inpainting network named Gconv_CS (Gated convolution based CSFF and SAM), incorporating a Supervised Attention Module (SAM) and Cross-Stage Feature Fusion (CSFF), was proposed. In Gconv_CS, SAM and CSFF were introduced into Gconv, a two-stage network model with gated convolution. SAM ensured the effectiveness of the feature information passed to the next stage by using the real image to supervise the output features of the previous stage. CSFF fused the features from the encoder-decoder of the previous stage and fed them into the encoder of the next stage to compensate for the loss of feature information in the previous stage. Experimental results show that, with missing-region proportions of 1% to 10% and compared with the baseline model Gconv, Gconv_CS improved the Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity index (SSIM) by 1.5% and 0.5% respectively and reduced the Fréchet Inception Distance (FID) and L1 loss by 21.8% and 14.8% respectively on the CelebA-HQ dataset; on the Places2 dataset, the first two indicators increased by 26.7% and 0.8% respectively, and the latter two decreased by 7.9% and 37.9% respectively. Gconv_CS also achieved good restoration results when used to remove masks from a giant panda's face.
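
As a rough illustration of the supervised-attention idea described above (a sketch, not the authors' implementation), the following PyTorch snippet shows a stage-boundary module that reconstructs an image from the first stage's features for supervision against the real image, and uses an attention map derived from that reconstruction to gate the features handed to the next stage; all layer names and shapes are hypothetical.

```python
import torch
import torch.nn as nn

class SupervisedAttention(nn.Module):
    """Illustrative SAM between two inpainting stages (hypothetical layout)."""
    def __init__(self, channels: int):
        super().__init__()
        self.to_image = nn.Conv2d(channels, 3, kernel_size=3, padding=1)
        self.to_attn = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.refine = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feats, masked_input):
        # Stage-1 restored image; an L1 loss against the real image
        # would be applied to this output to supervise the features.
        restored = self.to_image(feats) + masked_input
        # Attention map computed from the restored image.
        attn = torch.sigmoid(self.to_attn(restored))
        # Only attention-weighted features flow into the next stage.
        next_feats = self.refine(feats) * attn + feats
        return next_feats, restored
```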

Density peak clustering algorithm based on adaptive nearest neighbor parameters
Huanhuan ZHOU, Bochuan ZHENG, Zheng ZHANG, Qi ZHANG
Journal of Computer Applications, 2022, 42(5): 1464-1471. DOI: 10.11772/j.issn.1001-9081.2021050753

Aiming at the problem that the nearest neighbor parameter has to be set manually in the density peak clustering algorithm based on shared nearest neighbors, a density peak clustering algorithm based on adaptive nearest neighbor parameters was proposed. Firstly, the proposed nearest neighbor parameter search algorithm was used to obtain the nearest neighbor parameter automatically. Then, the clustering centers were selected through the decision graph. Finally, according to the proposed allocation strategy for representative points, all sample points were clustered by allocating the representative points and then the non-representative points. The clustering results of the proposed algorithm were compared with those of six algorithms, namely Shared-Nearest-Neighbor-based Clustering by fast search and find of Density Peaks (SNN-DPC), Clustering by fast search and find of Density Peaks (DPC), Affinity Propagation (AP), Ordering Points To Identify the Clustering Structure (OPTICS), Density-Based Spatial Clustering of Applications with Noise (DBSCAN), and K-means, on synthetic datasets and UCI datasets. Experimental results show that the proposed algorithm outperforms the other six algorithms on evaluation indicators such as Adjusted Mutual Information (AMI), Adjusted Rand Index (ARI), and the Fowlkes and Mallows Index (FMI). The proposed algorithm can obtain effective nearest neighbor parameters automatically and can better allocate the sample points in the edge regions of the clusters.
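
For readers unfamiliar with density-peak decision graphs, the sketch below computes the two quantities such a graph plots, a k-nearest-neighbor density estimate rho and the distance delta to the nearest denser point, using scikit-learn; the exact density definition and the paper's adaptive search for the nearest neighbor parameter k are not reproduced here.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def decision_graph(X: np.ndarray, k: int):
    """Compute rho (density) and delta (distance to the nearest denser
    point) for a density-peak decision graph; illustrative definitions."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, _ = nn.kneighbors(X)                # column 0 is the point itself
    rho = np.exp(-dist[:, 1:].mean(axis=1))   # denser -> larger rho
    order = np.argsort(-rho)                  # densest point first
    pair = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    delta = np.full(len(X), np.inf)
    for rank, i in enumerate(order):
        if rank == 0:
            delta[i] = pair[i].max()          # densest point: use max distance
        else:
            delta[i] = pair[i, order[:rank]].min()
    # Cluster centers are the points with both large rho and large delta.
    return rho, delta
```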

Wildlife object detection combined with solving method of long-tail data
Qianzhou CAI, Bochuan ZHENG, Xiangyin ZENG, Jin HOU
Journal of Computer Applications, 2022, 42(4): 1284-1291. DOI: 10.11772/j.issn.1001-9081.2021071279

Wild animal object detection based on infrared camera images is conducive to the research and protection of wild animals. Because the numbers of individuals of different wildlife species differ greatly, wildlife datasets collected by infrared cameras suffer from the long-tail data problem of unevenly distributed species counts, which limits the overall performance of object detection neural network models. To solve the problem of low object detection accuracy caused by long-tail wildlife data, a method based on two-stage learning and re-weighting was proposed and applied to wildlife object detection based on YOLOv4-Tiny. Firstly, a new wildlife dataset with obvious long-tail characteristics was collected, labelled, and constructed. Secondly, a two-stage method based on transfer learning was used to train the neural network: in the first stage, the classification loss function was trained without weighting; in the second stage, two improved re-weighting methods were proposed, with the weights obtained in the first stage used as the pre-training weights for the re-weighting training. Finally, the model was evaluated on the wildlife test set. Experimental results showed that the proposed long-tail data solving method achieved 60.47% and 61.18% mAP (mean Average Precision) with the cross-entropy loss function and the focal loss function as the classification loss respectively, 3.30 and 5.16 percentage points higher than the no-weighting method under the respective loss functions, and 2.14 percentage points higher than the proposed improved effective-sample re-weighting method under the focal loss function. This shows that the proposed method can improve the object detection performance of the YOLOv4-Tiny network on wildlife datasets with long-tail characteristics.
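
As one concrete, commonly used form of re-weighting for long-tailed class counts, the sketch below computes class-balanced weights from the effective number of samples; the per-species counts and the beta value are hypothetical, and the paper's two improved re-weighting schemes are not reproduced here.

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta: float = 0.999):
    """Per-class loss weights from the 'effective number of samples':
    rare classes receive larger weights in the classification loss."""
    samples = np.asarray(samples_per_class, dtype=np.float64)
    effective_num = 1.0 - np.power(beta, samples)
    weights = (1.0 - beta) / effective_num
    return weights / weights.sum() * len(samples)  # normalise to mean 1

# Two-stage schedule (sketch): stage 1 trains with uniform class weights,
# stage 2 resumes from the stage-1 checkpoint with the re-weighted loss.
counts = [5000, 800, 120, 35]          # hypothetical per-species counts
print(class_balanced_weights(counts))  # rare classes get larger weights
```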

Sparse subspace clustering method based on random blocking
Qi ZHANG, Bochuan ZHENG, Zheng ZHANG, Huanhuan ZHOU
Journal of Computer Applications, 2022, 42(4): 1148-1154. DOI: 10.11772/j.issn.1001-9081.2021071271

Aiming at the problem of large clustering error in Sparse Subspace Clustering (SSC) methods, an SSC method based on random blocking was proposed. First, the original dataset was randomly divided into several subsets to construct several sub-problems. Then, after the coefficient matrices of the sub-problems were obtained by the sparse subspace Alternating Direction Method of Multipliers (ADMM), these coefficient matrices were expanded to the same size as that of the original problem and integrated into one coefficient matrix. Finally, a similarity matrix was calculated from the integrated coefficient matrix, and the clustering result of the original problem was obtained by using the Spectral Clustering (SC) algorithm. Compared with the best-performing algorithm among SSC, Stochastic Sparse Subspace Clustering via Orthogonal Matching Pursuit with Consensus (S3COMP-C), scalable Sparse Subspace Clustering by Orthogonal Matching Pursuit (SSCOMP), SC, and K-Means, the SSC method based on random blocking reduced the subspace clustering error by 3.12 percentage points on average, and its mutual information, Rand index, and entropy were all significantly better than those of the comparison algorithms. Experimental results show that the SSC method based on random blocking can significantly reduce the subspace clustering error and improve the clustering performance.
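
A minimal sketch of the random-blocking pipeline is given below, assuming NumPy and scikit-learn; a Lasso regression stands in for the paper's sparse-subspace ADMM solver, and the block count and regularisation strength are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc_random_blocking(X, n_clusters, n_blocks=4, alpha=0.01, seed=0):
    """Sketch: randomly split the samples into blocks, solve a sparse
    self-expression problem inside each block, place each block's
    coefficients back into a full-size matrix, then spectrally cluster."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(n), n_blocks)
    C = np.zeros((n, n))
    for idx in blocks:
        for j, col in enumerate(idx):
            others = np.delete(idx, j)
            # Express sample `col` as a sparse combination of the other
            # samples in its block (Lasso as a stand-in for ADMM).
            lasso = Lasso(alpha=alpha, max_iter=2000)
            lasso.fit(X[others].T, X[col])
            C[others, col] = lasso.coef_
    W = np.abs(C) + np.abs(C).T          # symmetric similarity matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed",
                              random_state=seed).fit_predict(W)
```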
